Results 1 - 4 of 4
1.
2021 IEEE International Conference on Big Data (Big Data 2021): 4387-4395, 2021.
Article in English | Scopus | ID: covidwho-1730874

ABSTRACT

COVID-19 is an airborne viral infection that attacks the human respiratory system; it became a global pandemic in early March 2020. The damage COVID-19 causes in the lungs can be identified on Computed Tomography (CT) scans. We present a novel approach to classifying COVID-19-infected and normal patients using a Random Forest (RF) model trained on a combination of Deep Learning (DL) features and radiomic texture features extracted from CT scans of patients' lungs. We developed and trained DL models based on CNN architectures to extract the DL features. The radiomic texture features are calculated from the CT scans and their associated infection masks. In this work, we show that RF classification using DL features in conjunction with radiomic texture features enhances prediction performance. Experimental results show that our proposed models achieve a higher true-positive rate, with an average Area Under the Receiver Operating Characteristic Curve (AUC) of 0.9768, 95% Confidence Interval (CI) [0.9757, 0.9780]. © 2021 IEEE.
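
As a hedged illustration of the classification step described in this abstract (not the authors' code), the following Python sketch concatenates hypothetical DL and radiomic feature arrays, trains a scikit-learn Random Forest, and reports the AUC; the CT feature extraction itself is assumed to have happened upstream, and all array names are placeholders.

# Minimal sketch, assuming pre-extracted features (not the paper's implementation).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
dl_features = rng.normal(size=(200, 128))        # hypothetical CNN embeddings, one row per scan
radiomic_features = rng.normal(size=(200, 50))   # hypothetical radiomic texture features per scan
labels = rng.integers(0, 2, size=200)            # 1 = COVID-19, 0 = normal

# Concatenate the two feature sets, as the abstract describes
X = np.concatenate([dl_features, radiomic_features], axis=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, stratify=labels, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
probs = clf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))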

2.
50th International Conference on Parallel Processing Workshop (ICPP 2021), 2021.
Article in English | Scopus | ID: covidwho-1455753

ABSTRACT

Our goal is to address challenges such as latency, scalability, throughput, and heterogeneous data sources in streaming-analytics and deep-learning pipelines for science-sensor and medical-imaging applications. We present a prototype Intelligent Parallel Distributed Streaming Framework (IPDSF) that is capable of distributed stream processing as well as distributed deep-learning training in batch mode. IPDSF is designed to run streaming Artificial Intelligence (AI) analytic tasks using data parallelism, including partitioning of multiple streams of short-time sensing data and high-resolution 3D medical images, and fine-grained task distribution. We show implementations of IPDSF for two real-world applications: (i) an Air Quality Index based on near-real-time streaming of aerosol Lidar backscatter, and (ii) generation of COVID-19 Computed Tomography (CT) scans using deep learning. We evaluate latency, throughput, and scalability, and quantitatively compare training and prediction against a single-instance baseline. The results show that IPDSF scales to process thousands of streaming science sensors in parallel for the Air Quality Index application. IPDSF uses novel 3D conditional Generative Adversarial Network (cGAN) training on parallel distributed Graphics Processing Unit (GPU) nodes to generate realistic high-resolution 3D CT scans of the lungs of COVID-19 patients. We show that IPDSF reduces cGAN training time linearly with the number of GPUs. © 2021 ACM.
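
The IPDSF internals are not given in the abstract; as a minimal, hedged sketch of the data-parallel idea only, the Python below fans hypothetical sensor-stream partitions out to a pool of worker processes, with analyze_window standing in for an arbitrary per-partition analytic task.

# Minimal sketch of data-parallel stream-partition processing (assumptions, not the IPDSF code).
from multiprocessing import Pool
import statistics

def analyze_window(partition):
    # Hypothetical per-partition analytic task, e.g. an aggregate over a sensing window
    sensor_id, samples = partition
    return sensor_id, statistics.fmean(samples)

if __name__ == "__main__":
    # One partition per sensor stream; values here are synthetic placeholders
    partitions = [(i, [float(i + j) for j in range(100)]) for i in range(1000)]
    with Pool(processes=8) as pool:
        results = pool.map(analyze_window, partitions)
    print(results[:3])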

3.
2020 International Conference on Computational Science and Computational Intelligence (CSCI 2020): 858-862, 2020.
Article in English | Scopus | ID: covidwho-1393668

ABSTRACT

We present a conditional Generative Adversarial Network (cGAN) architecture that is capable of generating 3D Computed Tomography (CT) scans in voxels from noisy and/or pixelated approximations, with the potential to generate fully synthetic 3D scan volumes. We believe the cGAN to be a tractable approach to generating 3D CT volumes, even though generating full-resolution deep fakes is presently impractical due to GPU memory limitations. We present results for autoencoder, denoising, and depixelating tasks trained and tested on two novel COVID-19 CT datasets. Our evaluation metrics include the Peak Signal-to-Noise Ratio (PSNR), which ranges from 12.53 to 46.46 dB, and a second metric that ranges from 0.89 to 1. © 2020 IEEE.
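
For reference, PSNR, the metric quoted above, is defined as 10·log10(MAX² / MSE). The short Python sketch below computes it for a reconstructed volume against a reference; the array names and intensity range are illustrative assumptions, not taken from the paper.

# Minimal sketch: PSNR between a reference volume and its reconstruction.
import numpy as np

def psnr(reference, reconstruction, data_range):
    """PSNR = 10 * log10(data_range^2 / MSE), in decibels."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)

# Example: a noisy approximation of a small synthetic volume
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 1.0, size=(64, 64, 64))
reconstruction = np.clip(reference + rng.normal(0.0, 0.05, size=reference.shape), 0.0, 1.0)
print(f"PSNR: {psnr(reference, reconstruction, data_range=1.0):.2f} dB")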

4.
2020 IEEE International Conference on Big Data: 1216-1225, 2020.
Article in English | Web of Science | ID: covidwho-1324897

ABSTRACT

COVID-19 is a novel infectious disease responsible for over 1.2 million deaths worldwide as of November 2020. Rapid testing is a high priority, and alternative testing strategies, including x-ray image classification, are a promising area of research. However, at present, public datasets of COVID-19 x-ray images have low data volumes, making it challenging to develop accurate image classifiers. Several recent papers have made use of Generative Adversarial Networks (GANs) to increase training data volumes, but realistic synthetic COVID-19 x-rays remain challenging to generate. We present a novel Mean Teacher + Transfer GAN (MTT-GAN) that generates high-quality COVID-19 chest x-ray images. To create a more accurate GAN, we employ transfer learning from the Kaggle pneumonia x-ray dataset, a highly relevant data source orders of magnitude larger than public COVID-19 datasets. Furthermore, we employ the Mean Teacher algorithm as a constraint to improve training stability. Our qualitative analysis shows that the MTT-GAN generates x-ray images that are greatly superior to a baseline GAN and visually comparable to real x-rays. Although board-certified radiologists can distinguish MTT-GAN fakes from real COVID-19 x-rays, quantitative analysis shows that the MTT-GAN greatly improves the accuracy of both a binary COVID-19 classifier and a multi-class pneumonia classifier compared to a baseline GAN. Our classification accuracy compares favorably with recently reported results in the literature for similar binary and multi-class COVID-19 screening tasks.
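
The Mean Teacher algorithm referenced above maintains a teacher network whose weights are an exponential moving average (EMA) of the student's. The PyTorch sketch below shows that standard update rule as an assumption of how such a constraint can be realised; it is not the MTT-GAN implementation.

# Minimal sketch of the standard Mean Teacher EMA update (assumption, not the paper's code).
import copy
import torch
import torch.nn as nn

def update_teacher(student, teacher, alpha=0.999):
    """teacher <- alpha * teacher + (1 - alpha) * student, parameter-wise."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)

student = nn.Linear(16, 1)                 # placeholder for the trained network
teacher = copy.deepcopy(student)           # teacher starts as a copy of the student
# ... after each student optimisation step:
update_teacher(student, teacher, alpha=0.999)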
